Search for: All records

Creators/Authors contains: "Howard, Ayanna"



  1. As the influence of social robots in people’s daily lives grows, research on understanding people’s perception of robots, including sociability, trust, acceptance, and preference, becomes more pervasive. Research has considered visual, vocal, or tactile cues to express robots’ emotions, whereas little research has provided a holistic view examining the interactions among the different factors influencing emotion perception. We investigated multiple facets of user perception of robots during a conversational task by varying the robots’ voice types, appearances, and emotions. In our experiment, 20 participants interacted with two robots having four different voice types. While participants read fairy tales to the robot, the robot gave vocal feedback expressing seven emotions, and the participants evaluated the robot’s profiles through post-surveys. The results indicate that (1) the accuracy of emotion perception differed depending on the presented emotion, (2) a regular human voice showed higher user preference and naturalness, (3) a characterized voice was more appropriate for expressing emotions, with significantly higher accuracy in emotion perception, and (4) participants showed significantly higher emotion recognition accuracy with the animal robot than with the humanoid robot. A follow-up study ([Formula: see text]) with voice-only conditions confirmed the importance of embodiment. The results from this study could provide guidelines for designing social robots that consider emotional aspects in conversations between robots and users.
  2. The attribution of human-like characteristics onto humanoid robots has become a common practice in Human-Robot Interaction by designers and users alike. Robot gendering, the attribution of gender onto a robotic platform via voice, name, physique, or other features, is a prevalent technique used to increase user acceptance of robots. One important factor relating to acceptance is user trust. As robots continue to integrate themselves into common societal roles, it will be critical to evaluate user trust in the robot's ability to perform its job. This paper examines the relationship among occupational gender roles, user trust, and gendered design features of humanoid robots. Results from the study indicate that there was no significant difference in the perception of trust in the robot's competency when considering the gender of the robot. This expands on findings from prior efforts suggesting that performance-based factors have a larger influence on user trust than the robot's gender characteristics. In fact, our study suggests that perceived occupational competency is a better predictor of human trust than robot gender or participant gender. As such, gendering in robot design should be considered critically by designers in the context of the application. Such precautions would reduce the potential for robotic technologies to perpetuate societal gender stereotypes.
  3. The purpose of this study was to survey the perspectives of clinicians regarding pediatric robotic exoskeletons and compare their views with the views of parents of children with disabilities. A total of 78 clinicians completed the survey; they were contacted through Children’s Healthcare of Atlanta, the American Academy for Cerebral Palsy and Developmental Medicine, and group pages on Facebook. Most of the clinicians were somewhat concerned to very concerned that a child might not use the device safely outside of the clinical setting. Most clinicians reported that the child would try to walk, run, and climb using the exoskeleton. The parents reported higher trust (i.e., lower concern) in the child using an exoskeleton outside of the clinical setting, compared to the clinician group. Prior experience with robotic exoskeletons can have an important impact on each group’s expectations and self-reported level of trust in the technology. 
  4. Modern societies rely extensively on computing technologies. As such, there is a need to identify and develop strategies for addressing fairness, ethics, accountability, and transparency (FEAT) in computing-based research, practice, and educational efforts. To achieve this aim, a workshop, funded by the National Science Foundation, convened a working group of experts to document best practices and integrate disparate approaches to FEAT. The working group included different disciplines, demographics, and institutional types, including large research-intensive universities, Historically Black Colleges and Universities, Hispanic-Serving Institutions, teaching institutions, and liberal arts colleges. The workshop brought academics and members of industry together along with government representatives, which is vitally important given the role and impact that each sector can have on the future of computing. Relevant insights were gained by drawing on the experience of policy scholars, lawyers, statisticians, sociologists, and philosophers along with the more traditional sources of expertise in the computing realm (such as computer scientists and engineers). The working group examined best practices and sought to articulate strategies for addressing FEAT in computing-based research and education. This included identifying methodological approaches that researchers could employ to facilitate FEAT, instituting guidelines on what problem definition practices work best, and highlighting best practices for data access and data inclusion. The resulting report is the culmination of the working group activities in identifying systematic methods and effective approaches to incorporate FEAT considerations into the design and implementation of computing artifacts. 
  5. Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. To improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication that vary the timing of information given and the type of initiation used by a robot. We investigate the effect that these forms of communication have on trust, with and without robot mistakes, during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when the robot offered advice only upon request, as opposed to when the robot took the initiative to give advice.
  6. In recent news, organizations have been considering the use of facial and emotion recognition for applications involving youth, such as surveillance and security in schools. However, the majority of facial emotion recognition research has focused on adults. Children, particularly in their early years, have been shown to express emotions quite differently than adults. Thus, before such algorithms are deployed in environments that impact the wellbeing and circumstance of youth, a careful examination should be made of their accuracy with respect to appropriateness for this target demographic. In this work, we utilize several datasets that contain facial expressions of children linked to their emotional state to evaluate eight different commercial emotion classification systems. We compare the ground truth labels provided by the respective datasets to the labels given with the highest confidence by the classification systems, and assess the results in terms of matching score (true positive rate, TPR), positive predictive value (PPV), and failure-to-compute rate. Overall results show that the emotion recognition systems displayed subpar performance on the datasets of children's expressions compared to prior work with adult datasets and initial human ratings. We then identify limitations associated with automated recognition of emotions in children and provide suggestions on directions for enhancing recognition accuracy through data diversification, dataset accountability, and algorithmic regulation.
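The evaluation protocol described in abstract 6 — comparing a classifier's highest-confidence label against a dataset's ground truth and reporting per-emotion TPR, PPV, and a failure-to-compute rate — can be sketched as follows. This is a minimal illustration, not the authors' actual evaluation code; the function name and the convention of marking failed computations with `None` are assumptions for the example.

```python
def evaluate_classifier(ground_truth, predictions):
    """Score one emotion classifier against dataset labels.

    ground_truth: list of true emotion labels, one per image.
    predictions:  list of the classifier's highest-confidence labels;
                  None marks a failure to compute (e.g. no face found),
                  which counts toward the failure rate, not TPR/PPV.
    """
    failures = sum(1 for p in predictions if p is None)
    scored = [(g, p) for g, p in zip(ground_truth, predictions) if p is not None]

    tpr, ppv = {}, {}
    for emotion in set(ground_truth):
        # TPR (matching score): of images truly labeled `emotion`,
        # what fraction did the system also label `emotion`?
        preds_for_true = [p for g, p in scored if g == emotion]
        tpr[emotion] = (sum(p == emotion for p in preds_for_true) / len(preds_for_true)
                        if preds_for_true else 0.0)
        # PPV: of images the system labeled `emotion`,
        # what fraction truly were `emotion`?
        truths_for_pred = [g for g, p in scored if p == emotion]
        ppv[emotion] = (sum(g == emotion for g in truths_for_pred) / len(truths_for_pred)
                        if truths_for_pred else 0.0)

    return {"tpr": tpr, "ppv": ppv,
            "failure_rate": failures / len(predictions)}
```

Reporting TPR and PPV per emotion, rather than a single overall accuracy, is what lets a study like this surface the finding that accuracy differs across expressed emotions and across demographics.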